
    Deep Learning Applied to the Asteroseismic Modeling of Stars with Coherent Oscillation Modes

    We develop a novel method based on machine learning principles to achieve optimal initiation of CPU-intensive computations for forward asteroseismic modeling in a multi-D parameter space. A deep neural network is trained on a precomputed asteroseismology grid containing about 62 million coherent oscillation-mode frequencies derived from stellar evolution models. These models are representative of the core-hydrogen burning stage of intermediate-mass and high-mass stars. The evolution models constitute a 6D parameter space, and their predicted low-degree pressure- and gravity-mode oscillations are scanned using a genetic algorithm. A software pipeline is created to find the best-fitting stellar parameters for a given set of observed oscillation frequencies. The proposed method finds the optimal regions in the 6D parameter space in less than a minute, hence providing the optimal starting point for further, more detailed forward asteroseismic modeling in a high-dimensional context. We test and apply the method to seven pulsating stars that were previously modeled asteroseismically by classical grid-based forward modeling based on a $\chi^2$ statistic and obtain good agreement with past results. Our deep learning methodology opens up the application of asteroseismic modeling in +6D parameter space for thousands of stars pulsating in coherent modes with long lifetimes observed by the Kepler space telescope and to be discovered with the TESS and PLATO space missions, while applications so far were done star-by-star for only a handful of cases. Our method is open source and can be used freely by anyone. Comment: Accepted for publication in PASP Special Volume on Machine Learning
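    A minimal sketch of the central idea, a network trained on a precomputed grid that maps mode frequencies to stellar parameters, is given below. The layer sizes, the fixed number of 20 input frequencies, and the synthetic stand-in for the asteroseismology grid are assumptions for illustration, not the paper's actual pipeline or architecture.

```python
# Illustrative sketch: a small dense network mapping a fixed-length vector of
# oscillation-mode frequencies to the 6 grid parameters, mimicking the
# "precomputed grid -> trained network -> fast inversion" idea described above.
import numpy as np
import tensorflow as tf

n_modes, n_params = 20, 6            # assumed: 20 mode frequencies, 6D parameter space
rng = np.random.default_rng(0)

# Stand-in for the precomputed asteroseismology grid: random parameters and a fake
# frequency pattern; in practice these come from stellar evolution + pulsation codes.
theta = rng.uniform(0.0, 1.0, size=(100_000, n_params))
freqs = theta @ rng.normal(size=(n_params, n_modes)) + 0.01 * rng.normal(size=(100_000, n_modes))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_modes,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_params),     # predicted (normalised) stellar parameters
])
model.compile(optimizer="adam", loss="mse")
model.fit(freqs, theta, epochs=5, batch_size=256, verbose=0)

# Inference for one "observed" star takes milliseconds and provides the starting
# point for more detailed forward modeling.
theta_hat = model.predict(freqs[:1])
print(theta_hat)
```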

    Comparing Galactic Center MSSM dark matter solutions to the Reticulum II gamma-ray data

    Observations with the Fermi Large Area Telescope (LAT) indicate a possible small photon signal originating from the dwarf galaxy Reticulum II that exceeds the expected background between 2 GeV and 10 GeV. We have investigated two specific scenarios for annihilating WIMP dark matter within the phenomenological Minimal Supersymmetric Standard Model (pMSSM) framework as a possible source of these photons. We find that the same pMSSM parameter ranges reported in an earlier paper to be consistent with the Galactic Center excess are also consistent with the excess observed in Reticulum II, resulting in a J-factor of $\log_{10}(J(\alpha_{int}=0.5\,\mathrm{deg})) \simeq (20.3-20.5)^{+0.2}_{-0.3}$. This J-factor is consistent with $\log_{10}(J(\alpha_{int}=0.5\,\mathrm{deg})) = 19.5^{+1.0}_{-0.6}$ GeV$^2$ cm$^{-5}$, which is derived using an optimized spherical Jeans analysis of kinematic data obtained from the Michigan/Magellan Fiber System (M2FS). Comment: 4 pages, 2 figures, accepted in JCA
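    As a quick illustration of the consistency statement above, the sketch below simply checks whether the pMSSM-preferred J-factor interval (widened by its quoted uncertainties) overlaps the 1-sigma interval from the Jeans analysis; the interval arithmetic is an illustrative reading of the quoted numbers, not a statistical analysis from the paper.

```python
# Minimal arithmetic check of the overlap between the two quoted log10(J) intervals.
fit_lo, fit_hi = 20.3 - 0.3, 20.5 + 0.2      # pMSSM-preferred range, widened by its uncertainties
jeans_lo, jeans_hi = 19.5 - 0.6, 19.5 + 1.0  # 1-sigma interval from the M2FS Jeans analysis

overlap = max(fit_lo, jeans_lo) <= min(fit_hi, jeans_hi)
print(f"fit: [{fit_lo:.1f}, {fit_hi:.1f}]  measured: [{jeans_lo:.1f}, {jeans_hi:.1f}]  overlap: {overlap}")
# -> the two intervals overlap, i.e. the pMSSM solution is compatible with the measured J-factor
```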

    Analyzing $\gamma$-rays of the Galactic Center with Deep Learning

    We present a new method to interpret the $\gamma$-ray data of our inner Galaxy as measured by the Fermi Large Area Telescope (Fermi LAT). We train and test convolutional neural networks with simulated Fermi-LAT images based on models tuned to real data. We use this method to investigate the origin of an excess emission of GeV $\gamma$-rays seen in previous studies. Interpretations of this excess include $\gamma$ rays created by the annihilation of dark matter particles and $\gamma$ rays originating from a collection of unresolved point sources, such as millisecond pulsars. Our new method allows precise measurements of the contribution and properties of an unresolved population of $\gamma$-ray point sources in the interstellar diffuse emission model. Comment: 24 pages, 11 figures
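    The sketch below illustrates the kind of setup the abstract describes: a convolutional network trained on simulated count maps to regress the contribution of an unresolved point-source component. The toy simulator, image size, and architecture are assumptions for illustration and do not reproduce the paper's actual simulations or network.

```python
# Illustrative sketch: a small CNN regressing the fractional point-source
# contribution from simulated gamma-ray count maps.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
n_img, size = 10_000, 32

# Toy "simulations": smooth diffuse background plus an extra Poisson component whose
# relative normalisation (frac) is the regression target.
frac = rng.uniform(0.0, 0.5, size=n_img)
diffuse = rng.poisson(5.0, size=(n_img, size, size))
points = rng.poisson(frac[:, None, None] * 5.0, size=(n_img, size, size))
images = (diffuse + points)[..., None].astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(size, size, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # predicted point-source fraction
])
model.compile(optimizer="adam", loss="mse")
model.fit(images, frac, epochs=3, batch_size=128, verbose=0)
```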

    SPOT: Open Source framework for scientific data repository and interactive visualization

    SPOT is an open source and free visual data analytics tool for multi-dimensional data sets. Its web-based interface allows quick, interactive analysis of complex data. Operations on the data, such as aggregation and filtering, are implemented. The generated charts are responsive and rendered with OpenGL support. It follows the FAIR principles to allow reuse and comparison of the published data sets. The software also supports a PostgreSQL database for scalability.
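    SPOT itself is a web application; the pandas sketch below only illustrates the kind of filter-and-aggregate operations it exposes interactively, and does not use SPOT's actual API. The column names and data are invented for the example.

```python
# Illustrative filter + aggregation, the two operations named in the description above.
import pandas as pd

df = pd.DataFrame({
    "experiment": ["A", "A", "B", "B", "B"],
    "energy":     [1.2, 3.4, 0.7, 2.9, 5.1],
    "counts":     [10, 4, 22, 7, 3],
})

# Filtering: keep only rows above an energy threshold.
selected = df[df["energy"] > 1.0]

# Aggregation: total counts per experiment, the kind of summary a chart would display.
summary = selected.groupby("experiment")["counts"].sum()
print(summary)
```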

    A description of the Galactic Center excess in the Minimal Supersymmetric Standard Model

    Observations with the Fermi Large Area Telescope (LAT) indicate an excess in gamma rays originating from the center of our Galaxy. A possible explanation for this excess is the annihilation of Dark Matter particles. We have investigated the annihilation of neutralinos as Dark Matter candidates within the phenomenological Minimal Supersymmetric Standard Model (pMSSM). An iterative particle filter approach was used to search for solutions within the pMSSM. We found solutions that are consistent with astroparticle physics and collider experiments, and provide a fit to the energy spectrum of the excess. The neutralino is a Bino/Higgsino or Bino/Wino/Higgsino mixture with a mass in the range $84-92$~GeV or $87-97$~GeV annihilating into W bosons. A third solution is found for a neutralino of mass $174-187$~GeV annihilating into top quarks. The best solutions yield a Dark Matter relic density $0.06 < \Omega h^2 < 0.13$. These pMSSM solutions make clear forecasts for the LHC and for direct and indirect DM detection experiments. If the MSSM explanation of the excess seen by Fermi-LAT is correct, a DM signal might be discovered soon. Comment: Large extension of previous paper: 2 more solutions found in the MSSM (Bino-Higgsino and Bino-Wino-Higgsino into WW, and Bino into ttbar), added description of extra fit uncertainties, added description of flavor observables, added discussion of dwarf limit
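    A hedged sketch of the "iterative particle filter" strategy named above is given below: samples in parameter space are repeatedly weighted by fit quality, resampled, and perturbed so that they concentrate in well-fitting regions. The 2D toy likelihood stands in for the full pMSSM spectral fit; the target values and step sizes are invented for the example.

```python
# Generic sequential importance resampling ("particle filter") over a toy 2D parameter space.
import numpy as np

rng = np.random.default_rng(2)

def log_like(theta):
    """Toy stand-in for the fit to the excess spectrum (not the real pMSSM likelihood)."""
    mass, xsec = theta[..., 0], theta[..., 1]
    return -0.5 * (((mass - 90.0) / 5.0) ** 2 + ((xsec - 2.0) / 0.5) ** 2)

n = 2000
particles = np.column_stack([rng.uniform(10, 500, n), rng.uniform(0.1, 10, n)])

for step in range(20):
    w = np.exp(log_like(particles))
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)                                         # resample by fit quality
    particles = particles[idx] + rng.normal(scale=[2.0, 0.1], size=(n, 2))   # perturb survivors

print(particles.mean(axis=0))   # concentrates near the best-fitting region (~90, ~2)
```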

    Event Generation and Statistical Sampling for Physics with Deep Generative Models and a Density Information Buffer

    We present a study of the generation of events from a physical process with deep generative models. The simulation of physical processes requires not only the production of physical events, but also that these events occur with the correct frequencies. We investigate the feasibility of learning the event generation and the frequency of occurrence with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) to produce events like Monte Carlo generators. We study three processes: a simple two-body decay, the processes $e^+e^- \to Z \to l^+l^-$ and $pp \to t\bar{t}$ including the decay of the top quarks, and a simulation of the detector response. We find that the tested GAN architectures and the standard VAE are not able to learn the distributions precisely. By buffering density information of encoded Monte Carlo events, given the encoder of a VAE, we are able to construct a prior for the sampling of new events from the decoder that yields distributions in very good agreement with real Monte Carlo events and is several orders of magnitude faster. Applications of this work include generic density estimation and sampling, targeted event generation via a principal component analysis of encoded ground truth data, anomaly detection, and more efficient importance sampling, e.g. for the phase space integration of matrix elements in quantum field theories. Comment: 24 pages, 10 figures
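    A condensed sketch of the "density information buffer" idea follows: encode Monte Carlo events, keep (buffer) their latent codes, estimate their density, and sample the decoder from that empirical prior rather than from a standard normal. For brevity the sketch uses a plain autoencoder plus a kernel density estimate; the paper's models are GANs and (variational) autoencoders, and the event features here are synthetic stand-ins.

```python
# Buffered-latent-density sampling with a minimal autoencoder (illustrative only).
import numpy as np
import tensorflow as tf
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(3)
events = rng.normal(size=(50_000, 8)).astype("float32")   # stand-in for MC event features

enc_hidden = tf.keras.layers.Dense(32, activation="relu")
enc_out = tf.keras.layers.Dense(2)                         # 2D latent space
dec_hidden = tf.keras.layers.Dense(32, activation="relu")
dec_out = tf.keras.layers.Dense(8)

inp = tf.keras.layers.Input(shape=(8,))
z = enc_out(enc_hidden(inp))
recon = dec_out(dec_hidden(z))
autoencoder = tf.keras.Model(inp, recon)
encoder = tf.keras.Model(inp, z)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(events, events, epochs=3, batch_size=256, verbose=0)

# Buffer the encoded training events and fit a density estimate over the latent space.
latent = encoder.predict(events, verbose=0)
kde = KernelDensity(bandwidth=0.1).fit(latent)

# Generate new events by decoding samples drawn from the buffered latent density.
z_new = kde.sample(10_000, random_state=0).astype("float32")
generated = dec_out(dec_hidden(tf.constant(z_new))).numpy()
print(generated.shape)
```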

    Understanding phase contrast artefacts in micro computed absorption tomography

    Phase contrast imaging is a technique which captures objects with little or no light absorption. This is possible due to the wave nature of light, i.e., diffraction. In computerised tomography the aim is most often to reconstruct the light absorption properties of objects, but many objects cannot be imaged without obtaining a mix of both absorption and phase; this is especially true for weakly absorbing objects at high resolution. Hence, phase contrast is usually considered an unwanted artefact which should be removed. Traditionally this is done directly on the projection data prior to the filtered back projection algorithm, and the filter settings are derived from the physical setup of the imaging device. In this paper we show how these operations can be carried out on the reconstructed data, without access to the projection images, which yields much more flexibility than previous approaches. In particular, filtering can be applied to small regions of interest, which simplifies fine-tuning of parameters, and some of the low-pass filtering that is inherent in previous methods can be avoided. We also show that the filter parameters can be estimated from step edges in the reconstructed images.
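    A hedged sketch of post-reconstruction filtering on a small region of interest is shown below: a simple Fourier-domain filter is applied directly to a patch of the reconstructed slice. The filter form 1/(1 + alpha*|k|^2) and the value of alpha are illustrative assumptions, not the specific filter derived in the paper, where the strength would instead be estimated from step edges in the image.

```python
# Apply a simple low-pass-style Fourier filter to a region of interest of a
# reconstructed slice, without access to the projection data.
import numpy as np

rng = np.random.default_rng(4)
recon = rng.normal(size=(512, 512))            # stand-in for a reconstructed CT slice
roi = recon[100:228, 200:328]                  # filtering can target a small ROI only

ky = np.fft.fftfreq(roi.shape[0])[:, None]
kx = np.fft.fftfreq(roi.shape[1])[None, :]
alpha = 50.0                                   # filter strength; tuned per edge response in practice
filt = 1.0 / (1.0 + alpha * (kx ** 2 + ky ** 2))

roi_filtered = np.real(np.fft.ifft2(np.fft.fft2(roi) * filt))
recon[100:228, 200:328] = roi_filtered         # write the corrected region back
```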